Tensor recovery is an important problem in computer vision and machine learning. Such problems are usually solved with convex relaxations of the tensor rank and the $l_0$ norm, namely the nuclear norm and the $l_1$ norm, respectively. Convex approximations are known to produce biased estimators. To overcome this problem, corresponding non-convex regularizers have been adopted and designed. Inspired by the recently developed equivalent minimax-concave penalty (EMCP) theorem for matrices, this paper establishes the corresponding theorem for the tensor equivalent minimax-concave penalty (TEMCP). The tensor equivalent MCP (TEMCP) is constructed as the non-convex regularizer part and the equivalent weighted tensor $\gamma$ norm (EWTGN) as the low-rank part, both of which admit adaptive weights. On this basis, we propose two corresponding adaptive models for two classical tensor recovery problems, low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA), with optimization algorithms based on the alternating direction method of multipliers (ADMM). The new iterative adaptive algorithms are designed to produce more accurate tensor recovery results. For the tensor completion model, multispectral image (MSI), magnetic resonance imaging (MRI), and color video (CV) data are considered, while for the tensor robust principal component analysis model, hyperspectral images (HSI) under Gaussian noise and salt-and-pepper noise are considered. The proposed algorithms outperform state-of-the-art methods, and experiments confirm their decreasing objective values and convergence.
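As a rough, hedged illustration of one ingredient named above, the sketch below implements the standard scalar minimax-concave penalty (MCP) and its firm-thresholding proximal operator, applied to the singular values of a matrix unfolding. It is not the paper's TEMCP/EWTGN construction, and the adaptive weighting is omitted; all function names are illustrative.

```python
import numpy as np

def mcp_penalty(x, lam, gamma):
    """Minimax-concave penalty (MCP) value, elementwise; gamma > 1."""
    a = np.abs(x)
    inner = lam * a - a**2 / (2.0 * gamma)   # region |x| <= gamma * lam
    outer = 0.5 * gamma * lam**2             # constant tail |x| > gamma * lam
    return np.where(a <= gamma * lam, inner, outer)

def mcp_prox(t, lam, gamma):
    """Proximal operator of the MCP (firm thresholding), gamma > 1."""
    a = np.abs(t)
    shrunk = np.sign(t) * (a - lam) / (1.0 - 1.0 / gamma)
    return np.where(a <= lam, 0.0, np.where(a <= gamma * lam, shrunk, t))

def mcp_svt(M, lam, gamma):
    """Apply MCP shrinkage to the singular values of a matrix,
    e.g. one unfolding of a tensor inside an ADMM step."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(mcp_prox(s, lam, gamma)) @ Vt
```

Unlike soft-thresholding (the prox of the $l_1$ norm), firm thresholding leaves large singular values untouched, which is the usual argument for why MCP-type penalties reduce the bias of convex relaxations.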
Tensor sparse modeling is a promising approach that has achieved great success across science and engineering. It is well known that data in practical applications are often generated by multiple factors, so tensors are used to represent data while preserving the internal structure of those factors. However, unlike the matrix case, constructing a reasonable sparsity measure for tensors is a relatively difficult and very important task. In this paper, we therefore propose a new tensor sparsity measure called the tensor full-feature measure (FFM). It can simultaneously describe the feature information of each dimension of the tensor and the correlated features between any two dimensions, and it connects the Tucker rank with the tensor tube rank. This measure describes the sparse features of a tensor more comprehensively. On this basis, we establish its non-convex relaxation and apply the FFM to low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). FFM-based LRTC and TRPCA models are proposed, and two efficient alternating direction method of multipliers (ADMM) algorithms are developed to solve them. Extensive numerical experiments on real data confirm the advantages over state-of-the-art methods.
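The FFM itself is not spelled out in the abstract, so the sketch below only illustrates the two standard quantities it is said to connect: the weighted sum of nuclear norms of the mode unfoldings (a Tucker-rank surrogate) and the tubal nuclear norm from the t-SVD framework (a tube-rank surrogate). The exact FFM definition and its non-convex relaxation are not reproduced here.

```python
import numpy as np

def unfold(X, mode):
    """Mode-k unfolding of a 3-way tensor (rows indexed by mode k)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def sum_nuclear_norms(X, weights=(1/3, 1/3, 1/3)):
    """Weighted sum of nuclear norms of the mode unfoldings,
    a standard convex surrogate for the Tucker rank."""
    return sum(w * np.linalg.norm(unfold(X, k), 'nuc')
               for k, w in enumerate(weights))

def tubal_nuclear_norm(X):
    """Tensor nuclear norm under the t-SVD framework: average of the
    nuclear norms of the frontal slices in the Fourier domain."""
    Xf = np.fft.fft(X, axis=2)
    n3 = X.shape[2]
    return sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum()
               for k in range(n3)) / n3
```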
The low-rank tensor completion (LRTC) problem has attracted great attention in computer vision and signal processing, and obtaining high-quality image recovery results remains an urgent task. This paper proposes a new tensor $l_{2,1}$ norm minimization model (TLNM) that, unlike classical tensor completion methods based on the tensor nuclear norm (TNN), integrates the sum-of-nuclear-norms (SNN) method with the $l_{2,1}$ norm and the QR decomposition to solve the LRTC problem. To make better use of the local prior information of images, a total variation (TV) regularization term is introduced, leading to a new class of tensor $l_{2,1}$ norm minimization with total variation models (TLNMTV). Both proposed models are convex and therefore admit globally optimal solutions. Moreover, we adopt the alternating direction method of multipliers (ADMM) to obtain a closed-form solution for each variable, which guarantees the feasibility of the algorithms. Numerical experiments show that the two proposed algorithms are convergent and outperform the compared methods. In particular, our method significantly outperforms the compared methods when the sampling rate of hyperspectral images is 2.5%.
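As a minimal sketch of one building block behind the TLNM/TLNMTV models, the code below computes the $l_{2,1}$ norm and its closed-form proximal operator (group soft-thresholding over columns); closed-form subproblem updates of this kind are what make the per-variable ADMM steps mentioned above tractable. The column grouping and the step size are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l21_norm(M):
    """l_{2,1} norm: sum of the Euclidean norms of the columns of M."""
    return np.linalg.norm(M, axis=0).sum()

def prox_l21(M, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each column toward
    zero by tau (group soft-thresholding), a typical closed-form ADMM
    subproblem update."""
    norms = np.linalg.norm(M, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale
```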
Accurate determination of a small molecule candidate (ligand) binding pose in its target protein pocket is important for computer-aided drug discovery. Typical rigid-body docking methods ignore the pocket flexibility of protein, while the more accurate pose generation using molecular dynamics is hindered by slow protein dynamics. We develop a tiered tensor transform (3T) algorithm to rapidly generate diverse protein-ligand complex conformations for both pose and affinity estimation in drug screening, requiring neither machine learning training nor lengthy dynamics computation, while maintaining both coarse-grain-like coordinated protein dynamics and atomistic-level details of the complex pocket. The 3T conformation structures we generate are closer to experimental co-crystal structures than those generated by docking software, and more importantly achieve significantly higher accuracy in active ligand classification than traditional ensemble docking using hundreds of experimental protein conformations. 3T structure transformation is decoupled from the system physics, making future usage in other computational scientific domains possible.
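The abstract does not detail the 3T transformation itself, so the toy below only illustrates the general idea of tiered perturbations: small rigid-body transforms applied to nested groups of atomic coordinates, with outer tiers moving large groups coherently and inner tiers adding finer local changes. The grouping, noise scales, and function names are hypothetical; this is not the authors' 3T algorithm.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula: rotation by `angle` around the unit vector `axis`."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def tiered_transform(coords, tiers, rng):
    """Apply small rigid perturbations tier by tier: each tier is a
    (scale, index_groups) pair; outer tiers move large groups coherently,
    inner tiers add local detail on top (purely illustrative)."""
    out = coords.copy()
    for scale, groups in tiers:
        for idx in groups:
            center = out[idx].mean(axis=0)
            R = rotation_matrix(rng.normal(size=3), scale * rng.normal())
            shift = scale * rng.normal(size=3)
            out[idx] = (out[idx] - center) @ R.T + center + shift
    return out

# Example: one coarse tier over all atoms, one finer tier over two halves.
rng = np.random.default_rng(0)
coords = rng.normal(size=(100, 3))
tiers = [(0.3, [np.arange(100)]),
         (0.1, [np.arange(0, 50), np.arange(50, 100)])]
perturbed = tiered_transform(coords, tiers, rng)
```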
For Prognostics and Health Management (PHM) of Lithium-ion (Li-ion) batteries, many models have been established to characterize their degradation process. The existing empirical or physical models can reveal important information regarding the degradation dynamics. However, there is no general and flexible method to fuse the information represented by those models. Physics-Informed Neural Network (PINN) is an efficient tool to fuse empirical or physical dynamic models with data-driven models. To take full advantage of various information sources, we propose a model fusion scheme based on PINN. It is implemented by developing a semi-empirical, semi-physical Partial Differential Equation (PDE) to model the degradation dynamics of Li-ion batteries. When there is little prior knowledge about the dynamics, we leverage the data-driven Deep Hidden Physics Model (DeepHPM) to discover the underlying governing dynamic models. The uncovered dynamics information is then fused with that mined by the surrogate neural network in the PINN framework. Moreover, an uncertainty-based adaptive weighting method is employed to balance the multiple learning tasks when training the PINN. The proposed methods are verified on a public dataset of Lithium Iron Phosphate (LFP)/graphite batteries.
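As a hedged sketch of two ingredients named above, the PyTorch snippet below shows (i) uncertainty-based adaptive weighting of multiple loss terms via learnable log-variances and (ii) a generic PDE residual computed by automatic differentiation. The particular residual form $u_t = f(u, u_x)$, the network interface, and all names are illustrative assumptions, not the paper's semi-empirical PDE or its DeepHPM component.

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Balance several PINN loss terms (data fit, PDE residual, ...) with
    learnable log-variances, so each task weight adapts during training."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        losses = torch.stack(losses)  # list of scalar loss tensors
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()

def pde_residual(net, f, t, x):
    """Residual of a hypothetical degradation PDE u_t = f(u, u_x).
    `t` and `x` are leaf tensors of shape (N, 1)."""
    t.requires_grad_(True)
    x.requires_grad_(True)
    u = net(torch.cat([t, x], dim=1))
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    return u_t - f(u, u_x)
```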
Non-line-of-sight (NLOS) imaging aims to reconstruct three-dimensional hidden scenes from data measured in the line of sight, using photon time-of-flight information encoded in light after multiple diffuse reflections. Under-sampled scanning data can facilitate fast imaging. However, the resulting reconstruction problem becomes a severely ill-posed inverse problem whose solution is highly likely to be degraded by noise and distortions. In this paper, we propose two novel NLOS reconstruction models based on curvature regularization, i.e., the object-domain curvature regularization model and the dual (i.e., signal and object)-domain curvature regularization model. Fast numerical optimization algorithms are developed relying on the alternating direction method of multipliers (ADMM) with a backtracking stepsize rule, and are further accelerated by GPU implementation. We evaluate the proposed algorithms on both synthetic and real datasets and achieve state-of-the-art performance, especially in the compressed sensing setting. All our codes and data are available at https://github.com/Duanlab123/CurvNLOS.
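As a generic, hedged illustration of the ADMM structure mentioned above, the sketch below solves a regularized linear inverse problem with variable splitting. Soft-thresholding stands in for the curvature prior's proximal map, and the paper's backtracking stepsize rule and GPU acceleration are omitted.

```python
import numpy as np

def admm_inverse_problem(A, b, prox_reg, rho=1.0, iters=100):
    """Generic ADMM for  min_x 0.5 * ||A x - b||^2 + R(x)  with splitting
    x = z. `prox_reg(v, t)` is the proximal map of the regularizer (here a
    placeholder for a curvature prior)."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = AtA + rho * np.eye(n)
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # data-fit subproblem
        z = prox_reg(x + u, 1.0 / rho)                # regularizer subproblem
        u = u + x - z                                 # dual update
    return z

# Placeholder regularizer: soft-thresholding stands in for the curvature prox.
soft_threshold = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
```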
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation to raise the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing model generalization ability. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and model will be made public at https://github.com/mx-mark/DMJD.
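A minimal sketch of one plausible reading of disjoint masking follows: sample several masked views per image whose masked token sets do not overlap, so more tokens per image serve as reconstruction targets while each view keeps its own masking rate. The exact regulation, masking ratios, and view counts used by DMJD may differ.

```python
import numpy as np

def disjoint_masks(num_tokens, mask_ratio, num_views, rng):
    """Sample `num_views` masked views whose masked token sets are pairwise
    disjoint (requires num_views * mask_ratio <= 1), so that more tokens per
    image contribute a reconstruction target across the views."""
    n_mask = int(num_tokens * mask_ratio)
    assert num_views * n_mask <= num_tokens, "masked views would overlap"
    perm = rng.permutation(num_tokens)
    masks = []
    for v in range(num_views):
        mask = np.zeros(num_tokens, dtype=bool)
        mask[perm[v * n_mask:(v + 1) * n_mask]] = True  # True = masked token
        masks.append(mask)
    return masks

# Example: 196 patch tokens, four views, each masking a disjoint 25%.
rng = np.random.default_rng(0)
views = disjoint_masks(196, 0.25, 4, rng)
```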
Reinforcement learning (RL) is one of the most important branches of AI. Due to its capacity for self-adaptation and decision-making in dynamic environments, reinforcement learning has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics. However, some of these applications and systems have been shown to be vulnerable to security or privacy attacks, resulting in unreliable or unstable services. A large number of studies have focused on these security and privacy problems in reinforcement learning. However, few surveys have provided a systematic review and comparison of existing problems and state-of-the-art solutions to keep up with the pace of emerging threats. Accordingly, we herein present such a comprehensive review to explain and summarize the challenges associated with security and privacy in reinforcement learning from a new perspective, namely that of the Markov Decision Process (MDP). In this survey, we first introduce the key concepts related to this area. Next, we cover the security and privacy issues linked to the state, action, environment, and reward function of the MDP process, respectively. We further highlight the special characteristics of security and privacy methodologies related to reinforcement learning. Finally, we discuss the possible future research directions within this area.
Detecting abrupt changes in data distribution is one of the most significant tasks in streaming data analysis. Although many unsupervised Change-Point Detection (CPD) methods have been proposed recently to identify those changes, they still suffer from missing subtle changes, poor scalability, and/or sensitivity to noise points. To meet these challenges, we are the first to generalise the CPD problem as a special case of the Change-Interval Detection (CID) problem. We then propose a CID method, named iCID, based on a recent Isolation Distributional Kernel (IDK). iCID identifies a change interval if there is a high dissimilarity score between two non-homogeneous temporally adjacent intervals. The data-dependent property and finite feature map of IDK enable iCID to efficiently identify various types of change points in data streams while tolerating noise points. Moreover, the proposed online and offline versions of iCID have the ability to optimise key parameter settings. The effectiveness and efficiency of iCID have been systematically verified on both synthetic and real-world datasets.
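As a hedged illustration of the interval-scoring idea, the sketch below scores the dissimilarity between adjacent, non-overlapping intervals of a stream; a squared MMD with an RBF kernel stands in for the Isolation Distributional Kernel, which is not reproduced here. The windowing scheme and parameters are assumptions.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared MMD with an RBF kernel between two samples of shape (n, d);
    a stand-in for the IDK-based distributional similarity used by iCID."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def interval_scores(stream, width, gamma=1.0):
    """Dissimilarity between each pair of adjacent, non-overlapping intervals
    of a stream of shape (n, d); a high score flags a candidate change
    interval."""
    scores = []
    for start in range(0, len(stream) - 2 * width + 1, width):
        left = stream[start:start + width]
        right = stream[start + width:start + 2 * width]
        scores.append(rbf_mmd2(left, right, gamma))
    return np.array(scores)
```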
In the new era of personalization, learning the heterogeneous treatment effect (HTE) becomes an inevitable trend with numerous applications. Yet, most existing HTE estimation methods focus on independently and identically distributed observations and cannot handle the non-stationarity and temporal dependency in the common panel data setting. The treatment evaluators developed for panel data, on the other hand, typically ignore the individualized information. To fill the gap, in this paper, we initiate the study of HTE estimation in panel data. Under different assumptions for HTE identifiability, we propose the corresponding heterogeneous one-sided and two-sided synthetic learners, namely H1SL and H2SL, by leveraging the state-of-the-art HTE estimators for non-panel data and generalizing the synthetic control method to allow a flexible data-generating process. We establish the convergence rates of the proposed estimators. The superior performance of the proposed methods over existing ones is demonstrated by extensive numerical studies.
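As a hedged sketch of the classic synthetic-control ingredient that the proposed H1SL/H2SL learners are said to generalize, the code below fits donor weights on the probability simplex by projected gradient descent on pre-treatment outcomes. It is not the proposed estimator, and the variable names and shapes are assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def synthetic_control_weights(Y_pre, y_pre, steps=2000):
    """Fit non-negative donor weights summing to one that reproduce the
    treated unit's pre-treatment outcomes: Y_pre is (T_pre, n_donors),
    y_pre is (T_pre,)."""
    n = Y_pre.shape[1]
    w = np.full(n, 1.0 / n)
    lr = 1.0 / (np.linalg.norm(Y_pre, 2) ** 2)  # 1 / Lipschitz constant
    for _ in range(steps):
        grad = Y_pre.T @ (Y_pre @ w - y_pre)
        w = project_simplex(w - lr * grad)
    return w
```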